4. Multilayer perceptrons and back-propagation

Abstract

Multilayer feed-forward networks, or multilayer perceptrons (MLPs), have one or more "hidden" layers of nodes, and hence two or more layers of weights. The limitations of simple perceptrons do not apply to MLPs. In fact, as we will see later, a network with just one hidden layer can represent any Boolean function, including XOR, which, as we saw, is not linearly separable. Although the representational power of MLPs was recognized long ago, these networks attracted wide attention only once a learning algorithm for them, back-propagation, became available. The back-propagation algorithm is central to much current work on learning in neural networks. It was independently invented several times...
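The abstract's central claims can be illustrated with a minimal sketch: a one-hidden-layer MLP (two layers of weights) trained by back-propagation to learn XOR. The architecture, learning rate, iteration count, and random seed here are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic problem a simple perceptron cannot solve,
# because its classes are not linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of weights: input -> hidden and hidden -> output.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

lr = 1.0  # illustrative learning rate
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)     # hidden-layer activations
    out = sigmoid(h @ W2 + b2)   # network output

    # Backward pass: propagate error derivatives layer by layer
    # (squared-error loss, sigmoid derivative s * (1 - s)).
    d_out = (out - y) * out * (1 - out)   # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # delta at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

predictions = (out > 0.5).astype(int).ravel()
print(predictions.tolist())
```

After training, the thresholded outputs reproduce the XOR truth table, which a single layer of weights cannot do.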


Related articles

Training Multilayer Perceptrons with the Extended Kalman Algorithm

A large fraction of recent work in artificial neural nets uses multilayer perceptrons trained with the back-propagation algorithm described by Rumelhart et al. This algorithm converges slowly for large or complex problems such as speech recognition, where thousands of iterations may be needed for convergence even with small data sets. In this paper, we show that training multilayer perceptrons...


Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayer Networks

Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms such as back-propagation. Although back-propagation is efficient, its implementation in analog VLSI requires excessive computational hardware. It is shown that using gradient descent with direct approximation of the gradient instead of back-propagation...
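The technique this abstract alludes to, gradient descent with a direct approximation of the gradient, can be sketched as weight perturbation: perturb one weight at a time and estimate its gradient by a forward difference of the loss, with no back-propagated error signals. The tiny linear model, perturbation size, and learning rate below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression problem and a single linear unit, for illustration only.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
y = np.array([1.0, 0.0])
w = rng.normal(size=2)

def loss(w):
    # Squared error of the linear unit's predictions.
    pred = X @ w
    return float(((pred - y) ** 2).sum())

eps = 1e-4  # perturbation size
for _ in range(200):
    base = loss(w)
    grad = np.zeros_like(w)
    for i in range(len(w)):
        w_pert = w.copy()
        w_pert[i] += eps                       # perturb one weight
        grad[i] = (loss(w_pert) - base) / eps  # forward-difference gradient estimate
    w -= 0.1 * grad                            # ordinary gradient-descent step

print(loss(w))
```

Each step needs one extra loss evaluation per weight instead of a backward pass, which is why this scheme maps onto analog hardware far more simply than back-propagation, at the cost of more forward computations.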


Deep Big Multilayer Perceptrons for Digit Recognition

The competitive MNIST handwritten digit recognition benchmark has a long history of broken records since 1998. The most recent advancement by others dates back 8 years (error rate 0.4%). Good old on-line back-propagation for plain multi-layer perceptrons yields a very low 0.35% error rate on the MNIST handwritten digits benchmark with a single MLP and 0.31% with a committee of seven MLPs. All we...


Multilayer perceptron architectures for data compression tasks

Different kinds of multilayer perceptrons, using a back-propagation learning algorithm, have been used to perform data compression tasks. Depending upon the architecture and the type of problem they learn to solve (classification or auto-association), the networks provide different kinds of dimensionality reduction, preserving different properties of the data space. Some experiments show that using...


Prospective Hardware Implementation of the Chir Neural Network Algorithm

I review the recently developed Choice of Internal Representations (CHIR) training algorithm for multi-layer perceptrons, with an emphasis on relevant properties for hardware implementation. A comparison to the common error back-propagation algorithm shows that there are potential advantages in realizing CHIR in hardware...



Publication date: 2001